De-biasing the Lasso: Optimal Sample Size for Gaussian Designs

Authors

  • Adel Javanmard
  • Andrea Montanari
Abstract

Performing statistical inference in high-dimensional models is an outstanding challenge. A major source of difficulty is the absence of precise information on the distribution of high-dimensional regularized estimators. Here, we consider linear regression in the high-dimensional regime p ≫ n and the Lasso estimator. In this context, we would like to perform inference on a high-dimensional parameter vector θ∗ ∈ ℝ^p. Important progress has been achieved in computing confidence intervals and p-values for single coordinates θ∗_i, i ∈ {1, …, p}. A key role in these new inferential methods is played by a certain de-biased (or de-sparsified) estimator θ̂ that is constructed from the Lasso estimator. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of θ̂ are asymptotically Gaussian provided the true parameter vector θ∗ is s0-sparse with s0 = o(√n / log p). The condition s0 = o(√n / log p) is considerably stronger than the one required for consistent estimation, namely s0 = o(n / log p). Here we consider Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the de-biased estimator is asymptotically Gaussian under the nearly optimal condition s0 = o(n / (log p)^2). Note that earlier work was limited to s0 = o(√n / log p) even for perfectly known covariance. The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well, e.g. because its inverse is very sparse. For intermediate regimes, we describe the trade-off between sparsity in the coefficients θ∗ and sparsity in the inverse covariance of the design.

∗ Data Sciences and Operations Department, Marshall School of Business, University of Southern California. Email: [email protected]
† Department of Electrical Engineering and Department of Statistics, Stanford University. Email: montanar@stanford.edu
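
To make the de-biasing step concrete, below is a minimal Python sketch (not the authors' code) of the known-covariance construction used in this line of work, θ̂_d = θ̂ + (1/n) Σ⁻¹ Xᵀ(y − Xθ̂); the function name de_biased_lasso, the penalty scaling lam, and the identity-covariance demo are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.linear_model import Lasso

def de_biased_lasso(X, y, Sigma_inv, lam):
    """De-biased Lasso for a Gaussian design with known precision matrix Sigma_inv."""
    n, p = X.shape
    # Step 1: Lasso fit; sklearn minimizes (1/(2n))||y - Xw||^2 + lam*||w||_1.
    theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    # Step 2: de-biasing correction with M = Sigma^{-1} (covariance known).
    theta_d = theta_hat + Sigma_inv @ X.T @ (y - X @ theta_hat) / n
    return theta_d, theta_hat

# Illustrative usage on synthetic data in the regime p > n.
rng = np.random.default_rng(0)
n, p, s0, sigma = 400, 600, 10, 1.0
theta_star = np.zeros(p)
theta_star[:s0] = 1.0                        # s0-sparse true parameter vector
X = rng.standard_normal((n, p))              # rows ~ N(0, I), i.e. Sigma = I
y = X @ theta_star + sigma * rng.standard_normal(n)
lam = 2 * sigma * np.sqrt(np.log(p) / n)     # a standard theoretical scaling
Sigma_inv = np.eye(p)
theta_d, _ = de_biased_lasso(X, y, Sigma_inv, lam)
# Approximate per-coordinate 95% interval (asymptotic Gaussian limit):
#   theta_d[i] +/- 1.96 * sigma * sqrt(Sigma_inv[i, i] / n)

The only design-dependent ingredient in this sketch is Σ⁻¹; when Σ is unknown it must be replaced by an estimate, which is where the trade-off with inverse-covariance sparsity described in the abstract enters.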


Related articles

L1-Regularized Least Squares for Support Recovery of High Dimensional Single Index Models with Gaussian Designs

It is known that for a certain class of single index models (SIMs) [Formula: see text], support recovery is impossible when X ~ 𝒩(0, 𝕀_{p×p}) and a model-complexity-adjusted sample size is below a critical threshold. Recently, optimal algorithms based on Sliced Inverse Regression (SIR) were suggested. These algorithms work provably under the assumption that the design X comes from an i.i.d. Gaus...


Beyond Sub-Gaussian Measurements: High-Dimensional Structured Estimation with Sub-Exponential Designs

We consider the problem of high-dimensional structured estimation with norm-regularized estimators, such as Lasso, when the design matrix and noise are drawn from sub-exponential distributions. Existing results only consider sub-Gaussian designs and noise, and both the sample complexity and non-asymptotic estimation error have been shown to depend on the Gaussian width of suitable sets. In cont...


Sparse models and methods for optimal instruments with an application to eminent domain

We develop results for the use of LASSO and Post-LASSO methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p, that apply even when p is much larger than the sample size, n. We rigorously develop asymptotic distribution and inference theory for the resulting IV estimators and provide conditions under which th...


Lasso Methods for Gaussian Instrumental Variables Models

In this note, we propose to use sparse methods (e.g. LASSO, Post-LASSO, √LASSO, and Post-√LASSO) to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p, in the canonical Gaussian case. The methods apply even when p is much larger than the sample size, n. We derive asymptotic distributions for the resulting IV estim...


The Sparsity and Bias of the Lasso Selection in High-dimensional Linear Regression

Meinshausen and Bühlmann [Ann. Statist. 34 (2006) 1436–1462] showed that, for neighborhood selection in Gaussian graphical models, under a neighborhood stability condition, the LASSO is consistent even when the number of variables is of greater order than the sample size. Zhao and Yu [J. Machine Learning Research 7 (2006) 2541–2567] formalized the neighborhood stability condition in the contex...



Publication year: 2015